Predicting Bank Marketing Success on Term Deposit Subscription#

Summary#

In this analysis, we build a predictive model to determine whether a client will subscribe to a term deposit, using data from the direct marketing campaigns (phone calls) of a Portuguese banking institution.

After exploring several models (logistic regression, KNN, decision tree, and naive Bayes), we selected the logistic regression model as our primary predictive tool. The final model performs well on an unseen test set, achieving the highest AUC (Area Under the Curve) of the four models at approximately 0.90. This strong AUC score underscores the model’s capacity to differentiate between positive and negative outcomes. Notably, factors such as last contact duration, last contact month of the year, and the clients’ types of jobs play a significant role in influencing the classification decision.

Introduction#

In the banking sector, the evolution of specialized bank marketing has been driven by the expansion and intensification of the financial sector, introducing competition and transparency. Recognizing the need for professional and efficient marketing strategies to engage an increasingly informed and critical customer base, banks grapple with conveying the complexity and abstract nature of financial services. Precision in reaching specific locations, demographics, and societies has proven challenging. The advent of machine learning has revolutionized this landscape, utilizing data and analytics to inform banks about customers more likely to subscribe to financial products. In this machine learning-driven bank marketing project, we explore how a particular Portuguese bank can leverage predictive analytics to strategically prioritize customers for subscribing to a bank term deposit, showcasing the transformative potential of machine learning in refining marketing strategies and optimizing customer targeting for financial institutions.

Data#

Our analysis centers on direct marketing campaigns conducted by a prominent Portuguese banking institution: phone call campaigns designed to predict clients’ likelihood of subscribing to a bank term deposit. The dataset, ‘bank-full.csv’, contains all 45,211 examples and 17 attributes, ordered by date, and offers a detailed view of these marketing initiatives and the factors influencing client subscription decisions. The primary focus of our analysis is classification: predicting whether a client will subscribe (‘yes’) or not (‘no’) to a term deposit, providing crucial insights into client behavior in response to direct marketing initiatives. Through rigorous exploration of this dataset, we aim to uncover patterns and trends that can inform and enhance the effectiveness of future marketing campaigns.

Methods#

In the present analysis, we compare the results obtained with four well-known machine learning techniques: Logistic Regression (LR), Naïve Bayes (NB), Decision Trees (DT), and K-Nearest Neighbors (KNN); among these, Logistic Regression yielded the best performance in terms of accuracy and F-measure. Logistic Regression was chosen for its proficiency in uncovering associations between a binary dependent variable and continuous explanatory variables. Given the dataset’s characteristics, which include continuous independent variables and a binary dependent variable, Logistic Regression emerges as a suitable classifier for predicting customer subscription in the bank’s telemarketing campaign for term deposits. The classification report reveals insights into model performance, showcasing the trade-offs between precision and recall. While achieving an overall accuracy of 83%, the Logistic Regression model demonstrates strength in identifying positive cases, providing a foundation for optimizing future marketing strategies.

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import requests

from sklearn.preprocessing import OrdinalEncoder, StandardScaler, OneHotEncoder
from sklearn.model_selection import train_test_split, GridSearchCV, RandomizedSearchCV
from sklearn.metrics import confusion_matrix,f1_score, roc_auc_score, classification_report, recall_score, precision_score
from sklearn.pipeline import make_pipeline
from sklearn.impute import SimpleImputer
from sklearn.compose import ColumnTransformer
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn import metrics

from imblearn.over_sampling import RandomOverSampler, SMOTE, ADASYN, BorderlineSMOTE, KMeansSMOTE
from imblearn.under_sampling import ClusterCentroids, RandomUnderSampler

import warnings
import sys

# Import functions from the src folder
sys.path.append('..')
from src.resample import re_sample
from src.data_viz import plot_variables
from src.compute_and_plot_roc_curve import compute_and_plot_roc_curve
from src.model_report import model_report
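
These helpers live in the project’s src/ folder and their source is not reproduced in this report. For orientation, here is a minimal sketch of what compute_and_plot_roc_curve might look like (hypothetical; it assumes a fitted classifier exposing predict_proba, with model_report playing the analogous role for the classification report, metrics table, and confusion matrix):

# Hypothetical sketch of src/compute_and_plot_roc_curve.py -- the real helper
# may differ; shown only to clarify what the later calls plot and return.
from sklearn.metrics import roc_curve

def compute_and_plot_roc_curve(model, X, y, label):
    """Plot the ROC curve of a fitted model and return (fpr, tpr, auc)."""
    proba = model.predict_proba(X)[:, 1]      # scores for the positive class
    fpr, tpr, _ = roc_curve(y, proba)
    auc = roc_auc_score(y, proba)
    plt.plot(fpr, tpr, label=f"{label} (AUC = {auc:.3f})")
    plt.plot([0, 1], [0, 1], 'r--')           # chance diagonal
    plt.xlabel('False Positive Rate')
    plt.ylabel('True Positive Rate')
    plt.legend(loc='lower right')
    plt.show()
    return fpr, tpr, auc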

Analysis#

Data Import#

url = 'https://archive.ics.uci.edu/static/public/222/data.csv'

request = requests.get(url)
request.raise_for_status()  # stop early if the download failed
with open("../data/raw/bank-full.csv", 'wb') as f:
    f.write(request.content)

Global Config#

pd.set_option('display.max_columns', None)
pd.options.display.float_format = '{:.3f}'.format
RANDOM_STATE = 522
warnings.filterwarnings("ignore")

Pre-Exploration#

bank = pd.read_csv('../data/raw/bank-full.csv', sep=',')
bank.columns
Index(['age', 'job', 'marital', 'education', 'default', 'balance', 'housing',
       'loan', 'contact', 'day_of_week', 'month', 'duration', 'campaign',
       'pdays', 'previous', 'poutcome', 'y'],
      dtype='object')
bank.shape
(45211, 17)
bank.head()
age job marital education default balance housing loan contact day_of_week month duration campaign pdays previous poutcome y
0 58 management married tertiary no 2143 yes no NaN 5 may 261 1 -1 0 NaN no
1 44 technician single secondary no 29 yes no NaN 5 may 151 1 -1 0 NaN no
2 33 entrepreneur married secondary no 2 yes yes NaN 5 may 76 1 -1 0 NaN no
3 47 blue-collar married NaN no 1506 yes no NaN 5 may 92 1 -1 0 NaN no
4 33 NaN single NaN no 1 no no NaN 5 may 198 1 -1 0 NaN no
bank.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 45211 entries, 0 to 45210
Data columns (total 17 columns):
 #   Column       Non-Null Count  Dtype 
---  ------       --------------  ----- 
 0   age          45211 non-null  int64 
 1   job          44923 non-null  object
 2   marital      45211 non-null  object
 3   education    43354 non-null  object
 4   default      45211 non-null  object
 5   balance      45211 non-null  int64 
 6   housing      45211 non-null  object
 7   loan         45211 non-null  object
 8   contact      32191 non-null  object
 9   day_of_week  45211 non-null  int64 
 10  month        45211 non-null  object
 11  duration     45211 non-null  int64 
 12  campaign     45211 non-null  int64 
 13  pdays        45211 non-null  int64 
 14  previous     45211 non-null  int64 
 15  poutcome     8252 non-null   object
 16  y            45211 non-null  object
dtypes: int64(7), object(10)
memory usage: 5.9+ MB
bank.y.value_counts()/len(bank)
y
no    0.883
yes   0.117
Name: count, dtype: float64

Note that the target is class-imbalanced: roughly 88% ‘no’ versus 12% ‘yes’.

Train Test Split#

bank_train, bank_test = train_test_split(bank
                                        , test_size=0.2
                                        , random_state=RANDOM_STATE
                                        , stratify=bank.y
                                        )
bank_train.y.value_counts()/len(bank_train)
y
no    0.883
yes   0.117
Name: count, dtype: float64
X_train, y_train = bank_train.drop(columns=["y"]), bank_train["y"]
X_test, y_test = bank_test.drop(columns=["y"]), bank_test["y"]

Via the stratified split, we preserve the label distribution of the original dataset in both the training and test sets.

EDA#

for i in list(bank_train.columns):
    print(f"{i:<10}->  {bank_train[i].nunique():<5} unique values")
age       ->  77    unique values
job       ->  11    unique values
marital   ->  3     unique values
education ->  3     unique values
default   ->  2     unique values
balance   ->  6601  unique values
housing   ->  2     unique values
loan      ->  2     unique values
contact   ->  2     unique values
day_of_week->  31    unique values
month     ->  12    unique values
duration  ->  1506  unique values
campaign  ->  47    unique values
pdays     ->  536   unique values
previous  ->  40    unique values
poutcome  ->  3     unique values
y         ->  2     unique values
bank_int = list(bank_train.select_dtypes(include = ['int64']).columns)
bank_str = list(bank_train.select_dtypes(include = ['object']).columns)
bank_categorical = bank_str + ['day_of_week']
bank_categorical
['job',
 'marital',
 'education',
 'default',
 'housing',
 'loan',
 'contact',
 'month',
 'poutcome',
 'y',
 'day_of_week']

Data Visualization#

We plotted the distributions of each predictor from the training data set and grouped and coloured the distribution by class (yes:green and no:blue).

Categorical variables#

plot_variables(bank_train, bank_categorical, var_type='categorical', ignore_vars='y')

Continuous variables#

plot_variables(bank_train, bank_int, var_type='continuous')

Log-Transformed Continuous Variables#

bank_log = ['balance', 'duration', 'campaign', 'pdays', 'previous']
plot_variables(bank_train, bank_log, var_type='log')
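
plot_variables is likewise a project helper whose source is not shown here; a minimal sketch of what it might look like (hypothetical; one panel per variable, grouped and coloured by the target class):

# Hypothetical sketch of src/data_viz.plot_variables -- the real helper may
# differ. Overlays the per-class distribution of each requested variable.
def plot_variables(df, variables, var_type='continuous', ignore_vars=None, target='y'):
    for var in [v for v in variables if v != ignore_vars]:
        for label, colour in [('yes', 'green'), ('no', 'blue')]:
            values = df.loc[df[target] == label, var]
            if var_type == 'categorical':
                values.value_counts().plot(kind='bar', color=colour, alpha=0.5, label=label)
            else:
                if var_type == 'log':
                    values = np.log1p(values.clip(lower=0))  # tame heavy right skew
                values.plot(kind='hist', bins=30, color=colour, alpha=0.5, label=label)
        plt.title(var)
        plt.legend()
        plt.show()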

Preprocessing#

In this section, we are defining lists with the names of the features according to their type.

numeric_features = bank.select_dtypes('number').columns.tolist()
categorical_features = ['job', 'marital', 'contact', 'month', 'poutcome']
ordinal_features = ['education']
binary_features = ['default', 'housing', 'loan']
drop_features = []
target = "y"

Then, we define the transformations to be applied to the different columns. We specify the order of the education levels, since education is an ordinal variable (with the order below, OrdinalEncoder maps ‘tertiary’ to 0, ‘secondary’ to 1, and ‘primary’ to 2), and we create pipelines that impute nulls before each transformation. All of the transformers impute the most frequent value, except for the numeric transformer, which imputes the median.

education_levels = ['tertiary', 'secondary', 'primary']
ordinal_transformer = make_pipeline(SimpleImputer(strategy="most_frequent"),
                                    OrdinalEncoder(categories=[education_levels], dtype=int))

numeric_transformer = make_pipeline(SimpleImputer(strategy="median"), StandardScaler())

binary_transformer = make_pipeline(SimpleImputer(strategy="most_frequent"),
                                    OneHotEncoder(dtype=int, drop='if_binary'))

categorical_transformer = make_pipeline(SimpleImputer(strategy="most_frequent"),
                                        OneHotEncoder(handle_unknown="ignore", sparse_output=False))
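
As a quick sanity check of the encoding order, the ordinal pipeline can be exercised on a toy frame (a hypothetical snippet, not part of the original analysis):

# Toy check: with education_levels as defined above, the encoder maps
# 'tertiary' -> 0, 'secondary' -> 1, 'primary' -> 2.
demo = pd.DataFrame({'education': ['primary', 'secondary', 'tertiary']})
ordinal_transformer.fit_transform(demo)   # array([[2], [1], [0]])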

Finally, we create a column transformer named preprocessor.

preprocessor = ColumnTransformer(
    transformers=[
        ('numeric', numeric_transformer, numeric_features),
        ('ordinal', ordinal_transformer, ordinal_features),
        ('binary', binary_transformer, binary_features),
        ('categorical', categorical_transformer, categorical_features),
        ('drop', 'drop', drop_features)
    ])

Fitting and transforming X_train#

transformed_train = preprocessor.fit_transform(X_train)
column_names = (
    numeric_features +
    ordinal_features +
    preprocessor.named_transformers_['binary'].named_steps['onehotencoder'].get_feature_names_out().tolist() +
    preprocessor.named_transformers_['categorical'].named_steps['onehotencoder'].get_feature_names_out().tolist() 
    )

X_train_trans = pd.DataFrame(transformed_train, columns=column_names)
X_train_trans.head(5)
age balance day_of_week duration campaign pdays previous education x0_yes x1_yes x2_yes x0_admin. x0_blue-collar x0_entrepreneur x0_housemaid x0_management x0_retired x0_self-employed x0_services x0_student x0_technician x0_unemployed x1_divorced x1_married x1_single x2_cellular x2_telephone x3_apr x3_aug x3_dec x3_feb x3_jan x3_jul x3_jun x3_mar x3_may x3_nov x3_oct x3_sep x4_failure x4_other x4_success
0 -0.463 -0.413 0.627 -0.733 -0.564 -0.411 -0.243 1.000 0.000 1.000 0.000 1.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 1.000 1.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 1.000 0.000 0.000 0.000 1.000 0.000 0.000
1 1.612 -0.072 -1.418 -0.679 0.072 -0.411 -0.243 1.000 0.000 0.000 0.000 1.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 1.000 0.000 0.000 1.000 0.000 0.000 0.000 1.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 1.000 0.000 0.000
2 -0.086 -0.408 -1.418 -0.510 -0.564 -0.411 -0.243 1.000 0.000 1.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 1.000 0.000 0.000 0.000 0.000 1.000 0.000 1.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 1.000 0.000 0.000 0.000 0.000 0.000 1.000 0.000 0.000
3 -0.369 -0.445 -1.178 -0.421 -0.564 -0.271 4.767 0.000 0.000 1.000 0.000 0.000 0.000 0.000 0.000 1.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 1.000 0.000 1.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 1.000 0.000 0.000 0.000 0.000 0.000 1.000
4 0.197 -0.292 1.228 -0.283 -0.564 -0.411 -0.243 1.000 0.000 1.000 1.000 0.000 1.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 1.000 0.000 1.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 1.000 0.000 0.000 0.000 1.000 0.000 0.000
y_train.head(5)
4868     no
29723    no
8911     no
34737    no
5657     no
Name: y, dtype: object
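
As an aside, on recent scikit-learn releases (>= 1.0) the fitted ColumnTransformer can produce the transformed column names directly, avoiding the manual assembly above; a sketch (the automatic names carry transformer prefixes such as ‘numeric__age’):

# Alternative to building column_names by hand.
column_names_auto = preprocessor.get_feature_names_out().tolist()
X_train_trans_auto = pd.DataFrame(transformed_train, columns=column_names_auto)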

Transforming X_test#

transformed_test = preprocessor.transform(X_test)
column_names = (
    numeric_features +
    ordinal_features +
    preprocessor.named_transformers_['binary'].named_steps['onehotencoder'].get_feature_names_out().tolist() +
    preprocessor.named_transformers_['categorical'].named_steps['onehotencoder'].get_feature_names_out().tolist() 
    )

X_test_trans = pd.DataFrame(transformed_test, columns=column_names)
X_test_trans.head(5)
age balance day_of_week duration campaign pdays previous education x0_yes x1_yes x2_yes x0_admin. x0_blue-collar x0_entrepreneur x0_housemaid x0_management x0_retired x0_self-employed x0_services x0_student x0_technician x0_unemployed x1_divorced x1_married x1_single x2_cellular x2_telephone x3_apr x3_aug x3_dec x3_feb x3_jan x3_jul x3_jun x3_mar x3_may x3_nov x3_oct x3_sep x4_failure x4_other x4_success
0 1.235 -0.278 -1.178 -0.241 -0.246 -0.411 -0.243 1.000 0.000 1.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 1.000 0.000 0.000 0.000 0.000 1.000 0.000 1.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 1.000 0.000 0.000 0.000 1.000 0.000 0.000
1 0.480 -0.189 0.747 -0.471 0.390 -0.411 -0.243 1.000 0.000 1.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 1.000 0.000 0.000 1.000 0.000 1.000 0.000 0.000 0.000 0.000 0.000 0.000 1.000 0.000 0.000 0.000 0.000 0.000 0.000 1.000 0.000 0.000
2 0.291 0.351 1.709 -0.483 -0.246 -0.411 -0.243 1.000 0.000 1.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 1.000 0.000 0.000 0.000 0.000 1.000 0.000 1.000 0.000 0.000 0.000 0.000 0.000 0.000 1.000 0.000 0.000 0.000 0.000 0.000 0.000 1.000 0.000 0.000
3 1.517 -0.445 -0.215 -0.514 0.708 -0.411 -0.243 1.000 0.000 1.000 0.000 0.000 1.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 1.000 0.000 1.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 1.000 0.000 0.000 0.000 1.000 0.000 0.000
4 1.706 -0.110 -1.298 1.578 -0.564 -0.411 -0.243 1.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 1.000 0.000 0.000 0.000 0.000 0.000 1.000 0.000 0.000 1.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.000 1.000 0.000 0.000 1.000 0.000 0.000
y_test.head(5)
685       no
16193     no
17989     no
38058     no
24132    yes
Name: y, dtype: object

Resample#

Because the target is class-imbalanced, we apply a resampling technique to improve the performance of our models. Reference: https://imbalanced-learn.org/stable/under_sampling.html
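
The re_sample helper wraps imbalanced-learn samplers; a minimal sketch of what it might look like for the option used below (hypothetical; the real helper may support more strategies):

# Hypothetical sketch of src/resample.re_sample. Random undersampling keeps
# every minority ('yes') example and subsamples the majority ('no') class.
def re_sample(X, y, func='random_under_sample', random_state=RANDOM_STATE):
    samplers = {
        'random_under_sample': RandomUnderSampler(random_state=random_state),
        'random_over_sample': RandomOverSampler(random_state=random_state),
        'smote': SMOTE(random_state=random_state),  # SMOTE requires all-numeric X
    }
    return samplers[func].fit_resample(X, y)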

X_tr, y_tr = re_sample(X_train, y_train, func='random_under_sample')
y_tr = y_tr.map({'yes':1, 'no':0})
y_test = y_test.map({'yes':1, 'no':0})
y_test
685      0
16193    0
17989    0
38058    0
24132    1
        ..
41512    1
40278    1
36878    0
11589    0
23945    0
Name: y, Length: 9043, dtype: int64
y_tr.value_counts()
y
0    4231
1    4231
Name: count, dtype: int64
X_tr
age job marital education default balance housing loan contact day_of_week month duration campaign pdays previous poutcome
27984 33 blue-collar married secondary no 0 yes no cellular 28 jan 176 1 266 1 other
11229 39 blue-collar married primary no -186 no yes NaN 18 jun 320 3 -1 0 NaN
39526 27 technician single tertiary no 11862 no no telephone 25 may 281 5 101 1 success
36555 32 admin. divorced tertiary no 146 yes no cellular 12 may 69 1 361 1 failure
13289 39 admin. married secondary no 7711 yes no cellular 8 jul 289 1 -1 0 NaN
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
35956 59 retired married tertiary no 148 yes yes cellular 8 may 685 2 366 1 other
39773 45 blue-collar married secondary no 1723 no no cellular 1 jun 166 2 -1 0 NaN
44778 58 management married tertiary no 0 no no cellular 14 sep 358 2 -1 0 NaN
17794 46 admin. married secondary no 659 yes no telephone 29 jul 1127 11 -1 0 NaN
43294 35 blue-collar married secondary no 262 no no cellular 15 mar 427 1 181 3 success

8462 rows × 16 columns

models = {
    "Decision Tree": DecisionTreeClassifier(random_state=RANDOM_STATE),
    "KNN": KNeighborsClassifier(),
    "Naive Bayes": GaussianNB(),
    "Logistic Regression": LogisticRegression(max_iter=2000, random_state=RANDOM_STATE)
}

Modeling#

1. Logistic Regression#

First we applied a logistic regression model to our data. We used randomized search to find the best C parameter, as shown here:

df = pd.read_csv('../results/figures/lr_best_params.csv')
df
Parameter Value
0 logisticregression__C 0.626

The best score generated by the model with the best parameter:

df = pd.read_csv('../results/figures/lr_best_score.csv')
df
Metric Value
0 Best Score 0.904

Leveraging the logistic regression model with the best parameter, we got the below Receiver Operating Characteristic (ROC) curve:

![](../results/figures/lr_roc_auc.png)

The classification report:

df = pd.read_csv('../results/figures/lr_class_rep.csv')
df
Unnamed: 0 precision recall f1-score support
0 0 0.968 0.837 0.898 7985.000
1 1 0.392 0.793 0.524 1058.000
2 accuracy 0.832 0.832 0.832 0.832
3 macro avg 0.680 0.815 0.711 9043.000
4 weighted avg 0.901 0.832 0.854 9043.000

The precision, recall, f1, AUC:

df = pd.read_csv('../results/figures/lr_model.csv')
df
Model Recall_score Precision f1_score Area_under_curve
0 Logistic Regression 0.793 0.392 0.524 0.899

And the confusion matrix:

![](../results/figures/lr_model.png)
from scipy.stats import loguniform, randint, uniform
param_dist = {
    "logisticregression__C": loguniform(1e-3, 1e3)
}

classification_metrics = ["accuracy", "precision", "recall", "f1", "roc_auc"]

pipe = make_pipeline(
    preprocessor,
    models['Logistic Regression']  
    )

random_search = RandomizedSearchCV(pipe, 
                                   param_dist, 
                                   n_iter=100, 
                                   n_jobs=-1, 
                                   cv=5,
                                   scoring=classification_metrics,
                                   refit='roc_auc',
                                   return_train_score=True,
                                   random_state=RANDOM_STATE
                                  )
random_search.fit(X_tr, y_tr)
random_search.best_params_
{'logisticregression__C': 0.7173267753786653}
random_search.best_score_
0.9091723555892177
# Logistic Regression on the test set

# Use the selected hyperparameters
best_C = random_search.best_params_['logisticregression__C']
plot_confusion_matrix = True

pipe_lr = make_pipeline(
    preprocessor,
    LogisticRegression(C=best_C,
                        random_state=RANDOM_STATE) 
    )
# Train the model
pipe_lr.fit(X_tr,  y_tr)

fpr_lr, tpr_lr, auc_lr= compute_and_plot_roc_curve(pipe_lr, X_test,  y_test, "Logistic Regression")

model_lr = model_report(pipe_lr, X_test, y_test, "Logistic Regression")
model_lr
              precision    recall  f1-score   support

           0       0.97      0.83      0.90      7985
           1       0.39      0.79      0.52      1058

    accuracy                           0.83      9043
   macro avg       0.68      0.81      0.71      9043
weighted avg       0.90      0.83      0.85      9043
Model Recall_score Precision f1_score Area_under_curve
0 Logistic Regression 0.795 0.387 0.521 0.900

Discussion and Results:#

The presented classification report provides a detailed evaluation of a model’s performance on a binary classification task. Here are some key observations:

  • Precision and Recall: Precision measures the accuracy of positive predictions, indicating that when the model predicts a positive outcome, it is correct approximately 39% of the time. Recall, on the other hand, suggests that the model successfully identifies around 79% of the actual positive cases.

  • F1-Score: The F1-Score is the harmonic mean of precision and recall, providing a balance between the two. In this case, it is calculated at approximately 52%, reflecting a moderate balance between precision and recall.

  • Accuracy: The overall accuracy of the model is 83%, indicating the percentage of correctly predicted instances among all instances.

  • Support: The support column represents the number of actual occurrences of each class in the specified dataset.

  • Macro and Weighted Averages: The macro average calculates the unweighted average of precision, recall, and F1-score across classes, while the weighted average considers the support of each class. The macro average of the F1-score is around 71%, and the weighted average is approximately 85%.

  • AUC: The AUC for the logistic regression model is noted as approximately 90%. This suggests the model’s strong capability in distinguishing between the classes.

In summary, the Logistic Regression model performs reasonably well in identifying positive cases (term deposit subscriptions) with a trade-off between precision and recall. The overall evaluation metrics provide insights into the model’s strengths and areas for potential improvement.
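
For concreteness, each of these metrics follows directly from the confusion-matrix counts. A quick sanity check against the fitted pipeline (a minimal sketch, reusing pipe_lr from the cell above):

# Recompute precision, recall, and F1 from the raw confusion-matrix counts.
y_pred = pipe_lr.predict(X_test)
tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
print(f"precision = {tp / (tp + fp):.3f}")          # TP / (TP + FP)
print(f"recall    = {tp / (tp + fn):.3f}")          # TP / (TP + FN)
print(f"f1        = {2 * tp / (2 * tp + fp + fn):.3f}")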

2. KNN#

Then we implemented a KNN model on our dataset. To optimize the model, we employed a grid search to determine the most effective parameters, n_neighbors and weights:

df = pd.read_csv('../results/figures/KNN_best_params.csv')
df

The corresponding best score of the optimized KNN model:

df = pd.read_csv('../results/figures/KNN_best_score.csv')
df

By utilizing the KNN model optimized with the best parameters, we obtained the following ROC curve:

![](../results/figures/KNN_roc_auc.png)

The classification report:

df = pd.read_csv('../results/figures/KNN_class_rep.csv')
df

The precision, recall, f1, AUC:

df = pd.read_csv('../results/figures/KNN_model.csv')
df

And the confusion matrix:

![](../results/figures/KNN_conf_matr.png)
from scipy.stats import loguniform, randint, uniform
param_dist = {
    "kneighborsclassifier__n_neighbors": range(10,50),
    "kneighborsclassifier__weights":['uniform', 'distance']
}

classification_metrics = ["accuracy", "precision", "recall", "f1", "roc_auc"]

pipe = make_pipeline(
    preprocessor,
    models['KNN']  
    )

grid_search = GridSearchCV(pipe, 
                             param_dist, 
                             n_jobs=-1, 
                             cv=5,
                             scoring=classification_metrics,
                             refit='roc_auc',
                             return_train_score=True
                            )
grid_search.fit(X_tr, y_tr)
GridSearchCV(cv=5,
             estimator=Pipeline(steps=[('columntransformer',
                                        ColumnTransformer(transformers=[('numeric',
                                                                         Pipeline(steps=[('simpleimputer',
                                                                                          SimpleImputer(strategy='median')),
                                                                                         ('standardscaler',
                                                                                          StandardScaler())]),
                                                                         ['age',
                                                                          'balance',
                                                                          'day_of_week',
                                                                          'duration',
                                                                          'campaign',
                                                                          'pdays',
                                                                          'previous']),
                                                                        ('ordinal',
                                                                         Pipeline(steps=[('simpleimputer',
                                                                                          SimpleImputer(strat...
                                                                         ['job',
                                                                          'marital',
                                                                          'contact',
                                                                          'month',
                                                                          'poutcome']),
                                                                        ('drop',
                                                                         'passthrough',
                                                                         [])])),
                                       ('kneighborsclassifier',
                                        KNeighborsClassifier())]),
             n_jobs=-1,
             param_grid={'kneighborsclassifier__n_neighbors': range(10, 50),
                         'kneighborsclassifier__weights': ['uniform',
                                                           'distance']},
             refit='roc_auc', return_train_score=True,
             scoring=['accuracy', 'precision', 'recall', 'f1', 'roc_auc'])
grid_search.best_params_
{'kneighborsclassifier__n_neighbors': 42,
 'kneighborsclassifier__weights': 'distance'}
grid_search.best_score_
0.9002872403670791
# Use the selected hyperparameters
best_n_neighbors = grid_search.best_params_['kneighborsclassifier__n_neighbors']
best_weights = grid_search.best_params_['kneighborsclassifier__weights']

pipe = make_pipeline(
    preprocessor,
    KNeighborsClassifier(n_neighbors=best_n_neighbors,
                         weights=best_weights
                        ) 
    )
# Train the model
pipe.fit(X_tr,  y_tr)

fpr_knn, tpr_knn, auc_knn= compute_and_plot_roc_curve(pipe, X_test,  y_test, "KNN")

model_knn = model_report(pipe, X_test, y_test, "KNN")
model_knn
              precision    recall  f1-score   support

           0       0.96      0.87      0.91      7985
           1       0.43      0.74      0.54      1058

    accuracy                           0.85      9043
   macro avg       0.69      0.80      0.73      9043
weighted avg       0.90      0.85      0.87      9043
Model Recall_score Precision f1_score Area_under_curve
0 KNN 0.741 0.425 0.540 0.895

Discussion and Results:#

Based on the above results of the KNN model’s performance, we got some key findings:

  • Precision and Recall: For the positive class, the precision is about 42%, which indicates room for improvement in the accuracy of positive predictions. The positive class has a recall of approximately 74%, showing the model’s proficiency in detecting the actual positive instances.

  • F1-Score: For the positive class, the F1-score is around 54%, suggesting a moderate balance that could benefit from enhancement.

  • Accuracy: The model’s accuracy is reported at 85%, reflecting the proportion of correctly predicted instances out of all predictions made.

  • Support: The support metric indicates the actual occurrence of each class in the dataset, with 7985 instances for the negative class and 1058 for the positive class.

  • Macro and Weighted Averages: The macro average, which computes an unweighted mean across classes, presents an F1-score of approximately 73%, while the weighted average, taking into account the support for each class, is around 87%.

  • AUC: The AUC for the KNN model is noted as approximately 90%.

Summarizing the KNN model’s performance, it shows a strong ability in identifying negative cases with high precision and recall, while for positive cases, it demonstrates a fair detection rate with scope for improvement in precision.

3. Decision Tree#

Next, we implemented a decision tree and conducted a randomized search to identify the optimal parameters. The best parameters are depicted here:

df = pd.read_csv('../results/figures/dt_best_params.csv')
df

With the best max_depth and criterion, we got the below best score:

df = pd.read_csv('../results/figures/dt_best_score.csv')
df

Utilizing the decision tree model with the optimal parameters, we generated the ROC curve presented below:

![](../results/figures/dt_roc_auc.png)

The classification report:

df = pd.read_csv('../results/figures/dt_class_rep.csv')
df

The precision, recall, f1, AUC:

df = pd.read_csv('../results/figures/dt_model.csv')
df

And the confusion matrix:

![](../results/figures/dt_conf_matr.png)
param_dist = {
    "decisiontreeclassifier__max_depth": range(2, 200),
    "decisiontreeclassifier__criterion": ['gini', 'entropy', 'log_loss']
}

classification_metrics = ["accuracy", "precision", "recall", "f1", "roc_auc"]

pipe = make_pipeline(
    preprocessor,
    models['Decision Tree']  
    )

random_search = RandomizedSearchCV(pipe, 
                                   param_dist, 
                                   n_iter=100, 
                                   n_jobs=-1, 
                                   cv=5,
                                   scoring=classification_metrics,
                                   refit='roc_auc',
                                   return_train_score=True,
                                   random_state=RANDOM_STATE
                                  )
random_search.fit(X_tr, y_tr)
RandomizedSearchCV(cv=5,
                   estimator=Pipeline(steps=[('columntransformer',
                                              ColumnTransformer(transformers=[('numeric',
                                                                               Pipeline(steps=[('simpleimputer',
                                                                                                SimpleImputer(strategy='median')),
                                                                                               ('standardscaler',
                                                                                                StandardScaler())]),
                                                                               ['age',
                                                                                'balance',
                                                                                'day_of_week',
                                                                                'duration',
                                                                                'campaign',
                                                                                'pdays',
                                                                                'previous']),
                                                                              ('ordinal',
                                                                               Pipeline(steps=[('simpleimputer',
                                                                                                SimpleImputer...
                                                                               'passthrough',
                                                                               [])])),
                                             ('decisiontreeclassifier',
                                              DecisionTreeClassifier(random_state=522))]),
                   n_iter=100, n_jobs=-1,
                   param_distributions={'decisiontreeclassifier__criterion': ['gini',
                                                                              'entropy',
                                                                              'log_loss'],
                                        'decisiontreeclassifier__max_depth': range(2, 200)},
                   random_state=522, refit='roc_auc', return_train_score=True,
                   scoring=['accuracy', 'precision', 'recall', 'f1', 'roc_auc'])
random_search.best_params_
{'decisiontreeclassifier__max_depth': 6,
 'decisiontreeclassifier__criterion': 'entropy'}
random_search.best_score_
0.878007765349837
# Use the selected hyperparameters
best_max_depth = random_search.best_params_['decisiontreeclassifier__max_depth']
best_criterion= random_search.best_params_['decisiontreeclassifier__criterion']

pipe = make_pipeline(
    preprocessor,
    DecisionTreeClassifier(max_depth=best_max_depth,
                           criterion=best_criterion,
                           random_state=RANDOM_STATE
                           )
    )
# Train the model
pipe.fit(X_tr,  y_tr)

fpr_dt, tpr_dt, auc_dt = compute_and_plot_roc_curve(pipe, X_test,  y_test, "Decision Tree")

model_dt = model_report(pipe, X_test, y_test, "Decision Tree")
model_dt
              precision    recall  f1-score   support

           0       0.98      0.70      0.82      7985
           1       0.28      0.87      0.42      1058

    accuracy                           0.72      9043
   macro avg       0.63      0.79      0.62      9043
weighted avg       0.89      0.72      0.77      9043
Model Recall_score Precision f1_score Area_under_curve
0 Decision Tree 0.868 0.280 0.424 0.870

Discussion and Results:#

Some key observations for the above decision tree model:

  • Precision and Recall: For the positive class, precision drops significantly to around 28%, suggesting that positive predictions are less reliable. However, the recall for the positive class is high at approximately 87%, which means the model is adept at capturing most of the actual positive cases.

  • F1-Score: For the positive class, the F1-score is around 42%, indicating a moderate balance with potential room for improvement.

  • Accuracy: The accuracy of the Decision Tree model is calculated at approximately 72%, which reflects the percentage of correctly predicted instances among all instances.

  • Support: The ‘support’ metric denotes the actual number of occurrences for each class in the dataset, with 7985 instances for the negative class and 1058 instances for the positive class.

  • Macro and Weighted Averages: The macro average of the F1-score, which provides an unweighted mean across classes, is calculated at around 62%. The weighted average, which accounts for the support of each class, is approximately 77%.

  • AUC: The AUC for the Decision Tree model is noted as approximately 87%.

In summary, the Decision Tree model identifies negative cases with excellent precision, though at a moderate recall. It captures positive cases with high recall, but the precision for positive predictions is quite low, suggesting a significant area for improvement.

4. Naive Bayes#

Finally, we also tried a Naive Bayes model and employed a randomized search to identify the best var_smoothing parameter:

df = pd.read_csv('../results/figures/nb_best_params.csv')
df

The best score generated by the model with the best parameter:

df = pd.read_csv('../results/figures/nb_best_score.csv')
df

Utilizing the Naive Bayes model fine-tuned with the optimal parameter, we generated the following ROC curve:

![](../results/figures/nb_roc_auc.png)

The classification report:

df = pd.read_csv('../results/figures/nb_class_rep.csv')
df

The precision, recall, f1, AUC:

df = pd.read_csv('../results/figures/nb_model.csv')
df

And the confusion matrix:

![](../results/figures/nb_conf_matr.png)
param_dist = {
    "gaussiannb__var_smoothing": uniform(0, 1),
}

classification_metrics = ["accuracy", "precision", "recall", "f1", "roc_auc"]

pipe = make_pipeline(
    preprocessor,
    models['Naive Bayes']  
    )

random_search = RandomizedSearchCV(pipe, 
                                   param_dist, 
                                   n_iter=100, 
                                   n_jobs=-1, 
                                   cv=5,
                                   scoring=classification_metrics,
                                   refit='roc_auc',
                                   return_train_score=True,
                                   random_state=RANDOM_STATE
                                  )
random_search.fit(X_tr, y_tr)
RandomizedSearchCV(cv=5,
                   estimator=Pipeline(steps=[('columntransformer',
                                              ColumnTransformer(transformers=[('numeric',
                                                                               Pipeline(steps=[('simpleimputer',
                                                                                                SimpleImputer(strategy='median')),
                                                                                               ('standardscaler',
                                                                                                StandardScaler())]),
                                                                               ['age',
                                                                                'balance',
                                                                                'day_of_week',
                                                                                'duration',
                                                                                'campaign',
                                                                                'pdays',
                                                                                'previous']),
                                                                              ('ordinal',
                                                                               Pipeline(steps=[('simpleimputer',
                                                                                                SimpleImputer...
                                                                                'contact',
                                                                                'month',
                                                                                'poutcome']),
                                                                              ('drop',
                                                                               'passthrough',
                                                                               [])])),
                                             ('gaussiannb', GaussianNB())]),
                   n_iter=100, n_jobs=-1,
                   param_distributions={'gaussiannb__var_smoothing': <scipy.stats._distn_infrastructure.rv_continuous_frozen object at 0x167779e10>},
                   random_state=522, refit='roc_auc', return_train_score=True,
                   scoring=['accuracy', 'precision', 'recall', 'f1', 'roc_auc'])
random_search.best_params_
{'gaussiannb__var_smoothing': 0.2133036797422574}
random_search.best_score_
0.8540409424663921
# Use the selected hyperparameters
best_var_smoothing = random_search.best_params_['gaussiannb__var_smoothing']

pipe = make_pipeline(
    preprocessor,
    GaussianNB(var_smoothing=best_var_smoothing) 
    )
# Train the model
pipe.fit(X_tr,  y_tr)

fpr_nb, tpr_nb, auc_nb= compute_and_plot_roc_curve(pipe, X_test,  y_test, "Naive Bayes")

model_nb = model_report(pipe, X_test, y_test, "Naive Bayes")
model_nb
              precision    recall  f1-score   support

           0       0.94      0.90      0.92      7985
           1       0.42      0.55      0.48      1058

    accuracy                           0.86      9043
   macro avg       0.68      0.73      0.70      9043
weighted avg       0.88      0.86      0.87      9043
Model Recall_score Precision f1_score Area_under_curve
0 Naive Bayes 0.552 0.418 0.476 0.845

Findings from the evaluation of the Naive Bayes model’s performance:

  • Precision and Recall: For the positive class, the precision is significantly lower, at about 42%, showing that positive predictions are correct less than half of the time. The recall for the positive class is about 55%, indicating that the model is moderately effective at identifying actual positive cases.

  • F1-Score: For the positive class, the F1-score is about 48%, which indicates that there is room for improvement in balancing precision and recall for the positive predictions.

  • Accuracy: The model’s overall accuracy is 86%, reflecting the proportion of correctly predicted instances out of all predictions made.

  • Support: The ‘support’ value indicates the actual number of instances for each class in the dataset, with 7985 instances for the negative class and 1058 instances for the positive class.

  • Macro and Weighted Averages: The macro average of the F1-score, which calculates an unweighted mean across both classes, is around 70%. The weighted average, which accounts for the class distribution (support), is approximately 87%.

  • AUC: The Naive Bayes model has an AUC of approximately 84%.

In summary, the Naive Bayes model performs very well in identifying negative cases with high precision and recall, while it shows moderate effectiveness in identifying positive cases. The overall accuracy is quite high, and the AUC indicates a good discriminative ability. The model shows a strong performance on the negative class but there is a notable discrepancy in the positive class predictions, where both precision and recall could be improved for better balance and performance.

Performance of all models#

Aggregated results:#

We plotted the ROC curves for various models on a single graph and combined the scores into one table for comparison:

![](../results/figures/all_roc_auc.png)
![](../results/figures/all_model.png)
plt.figure(figsize=(10,10))
plt.title('Receiver Operating Characteristic')


plt.plot(fpr_lr, tpr_lr, 'b', label = '{:<20} = {:0.3f}'.format("Logistic Regression",auc_lr))
plt.plot(fpr_knn, tpr_knn, 'r', label = '{:<20} = {:0.3f}'.format("KNN",auc_knn))
plt.plot(fpr_nb, tpr_nb, 'y', label = '{:<20} = {:0.3f}'.format("Naive Bayes",auc_nb))
plt.plot(fpr_dt, tpr_dt, 'c', label = '{:<20} = {:0.3f}'.format("Decision Tree",auc_dt))

plt.legend(loc = 'lower right',prop={'family': 'monospace'})
plt.plot([0, 1], [0, 1],'r--')
plt.xlim([0, 1])
plt.ylim([0, 1])
plt.ylabel('True Positive Rate')
plt.xlabel('False Positive Rate')
plt.show()
pd.concat([model_lr,model_knn, model_dt, model_nb]).sort_values(by=['Area_under_curve'], ascending=False).reset_index(drop=True)
Model Recall_score Precision f1_score Area_under_curve
0 Logistic Regression 0.795 0.387 0.521 0.900
1 KNN 0.741 0.425 0.540 0.895
2 Decision Tree 0.868 0.280 0.424 0.870
3 Naive Bayes 0.552 0.418 0.476 0.845

Comparison of models:#

The table provides an overview of key evaluation metrics for different machine learning models applied to a binary classification task, specifically predicting customer subscription to a term deposit in a bank’s telemarketing campaign. Let’s analyze each metric for each model:

Logistic Regression:#

Recall Score (Sensitivity): 79% indicates the model’s ability to identify actual positive cases, capturing a substantial portion of them. Precision: 39% reflects the accuracy of positive predictions, indicating that when the model predicts a positive outcome, it is correct about 39% of the time. F1-Score: 52% is the harmonic mean of precision and recall, providing a balanced measure, though still moderate. Area Under the Curve (AUC): 90% signifies the model’s overall ability to distinguish between positive and negative instances.

KNN (K-Nearest Neighbors):#

Recall Score (Sensitivity): 74% indicates the model’s effectiveness in capturing actual positive cases. Precision: 42% reflects the accuracy of positive predictions. F1-Score: 54% is the harmonic mean of precision and recall, showing a moderate balance. AUC: 90% signifies good overall discriminative ability.

Decision Tree:#

Recall Score (Sensitivity): 87% indicates a high ability to capture actual positive cases. Precision: 28% reflects the accuracy of positive predictions, the lowest among the models compared. F1-Score: 42% is the harmonic mean of precision and recall, showing a weaker balance. AUC: 87% indicates a good ability to distinguish between positive and negative instances.

Naive Bayes:#

Recall Score (Sensitivity): 55% indicates a moderate ability to capture actual positive cases. Precision: 42% reflects the accuracy of positive predictions. F1-Score: 48% is the harmonic mean of precision and recall, showing a moderate balance. AUC: 84% suggests a reasonable ability to discriminate between positive and negative instances.

In summary, the models show varying performance across the metrics, with Logistic Regression performing best overall. It achieved a strong recall score, indicating a robust ability to capture actual positive cases, and a competitive balance between precision and recall as reflected in the F1-Score. Most importantly, the Logistic Regression model outperformed the other models in terms of the Area Under the Curve (AUC), signifying its superior ability to discriminate between positive and negative instances.

Feature Importance#

Finally, we chose the Logistic Regression model and analysed the key features, as determined by the model’s coefficients, for predicting whether a client will subscribe to a term deposit. Because the numeric features are standardized, the coefficient magnitudes are roughly comparable across features.

The importance of the key features are demonstrated below:

![](../results/figures/feat_imp.png)

Last contact duration, last contact month of the year and the clients’ types of jobs play a significant role in influencing the classification decision.

logistic_regression_model = pipe_lr.named_steps['logisticregression']
coefficients = list(logistic_regression_model.coef_[0])
feature_names = X_train_trans.columns.to_list()
df = pd.DataFrame({
    'Feature': feature_names,
    'Coefficient': coefficients
})

# Sort the coefficients in descending order (sorting by absolute value
# would instead rank positive and negative effects purely by magnitude)
df_sorted = df.sort_values('Coefficient', ascending=False)

# Plot the sorted coefficients using a bar chart
plt.figure(figsize=(12, 6))
plt.bar(df_sorted['Feature'], df_sorted['Coefficient'], color='skyblue')
plt.xlabel('Feature')
plt.ylabel('Coefficient')
plt.title('Feature Importance from Logistic Regression')
plt.xticks(rotation=90)  # Rotate feature names for better readability
plt.tight_layout()  # Adjust layout to prevent clipping of tick-labels
plt.show()

References#

Moro, S., Rita, P., & Cortez, P. (2012). Bank Marketing [Dataset]. UCI Machine Learning Repository. https://doi.org/10.24432/C5K306

Davis, J., & Goadrich, M. (2006). The Relationship Between Precision-Recall and ROC Curves. https://www.biostat.wisc.edu/~page/rocpr.pdf

Saito, T., & Rehmsmeier, M. (2015). The Precision-Recall Plot Is More Informative than the ROC Plot When Evaluating Binary Classifiers on Imbalanced Datasets. PLOS ONE, 10(3), e0118432. https://doi.org/10.1371/journal.pone.0118432

Flach, P. A., & Kull, M. (2015). Precision-Recall-Gain Curves: PR Analysis Done Right. https://papers.nips.cc/paper/2015/file/33e8075e9970de0cfea955afd4644bb2-Paper.pdf

Dwork, C., Feldman, V., Hardt, M., Pitassi, T., Reingold, O., & Roth, A. (2015, September 28). Generalization in Adaptive Data Analysis and Holdout Reuse. https://arxiv.org/pdf/1506.02629.pdf

Turkes (Vînt), M. C. (n.d.). Concept and Evolution of Bank Marketing. Transylvania University of Brasov, Faculty of Economic Sciences. https://www.researchgate.net/publication/49615486_CONCEPT_AND_EVOLUTION_OF_BANK_MARKETING/fulltext/0ffc5db50cf255165fc80b80/CONCEPT-AND-EVOLUTION-OF-BANK-MARKETING.pdf

Moro, S., Cortez, P., & Rita, P. (2014). A data-driven approach to predict the success of bank telemarketing. Decision Support Systems, 62, 22–31. https://repositorio.iscte-iul.pt/bitstream/10071/9499/5/dss_v3.pdf